
    Enhanced SPARQL-based design rationale retrieval

    Design rationale (DR) is an important category of design knowledge, and its effective reuse depends on successful retrieval. In this paper, an ontology-based DR retrieval approach is presented that allows users to search by entering ordinary queries, such as questions in natural language. First, an ontology-based semantic model of DR is developed from an extended issue-based information system (IBIS) representation of DR in order to exploit the semantics embedded in DR, and a database of ontology-based DR records is constructed that supports SPARQL queries. Second, two SPARQL query generation methods are proposed: the first automatically generates initial SPARQL queries from natural-language queries using template matching, and the second automatically generates them from DR record-based queries. In addition, keyword extension and optimization are performed to enhance the SPARQL-based retrieval. Third, a DR retrieval prototype system is implemented. The experimental results demonstrate the advantages of the proposed approach.
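    As a minimal sketch of the kind of generated SPARQL query such a system might run against an ontology-based DR store (the namespace, class, and property names below are hypothetical illustrations, not the paper's schema):

```python
# Sketch: executing a template-generated SPARQL query over an RDF store of
# design-rationale records with rdflib. The dr: namespace and the Issue /
# Decision / Argument properties are assumed for illustration only.
from rdflib import Graph

graph = Graph()
graph.parse("dr_records.ttl", format="turtle")  # assumed ontology-based DR database file

# A query that a template-matching step might produce for the question
# "Why was aluminium chosen for the bracket?"
query = """
PREFIX dr: <http://example.org/dr#>
SELECT ?decision ?argumentText
WHERE {
    ?issue     a dr:Issue ;
               dr:concerns   ?artifact ;
               dr:resolvedBy ?decision .
    ?decision  dr:supportedBy ?argument .
    ?argument  dr:text        ?argumentText .
    ?artifact  dr:label       ?label .
    FILTER regex(?label, "bracket", "i")
}
"""

for row in graph.query(query):
    print(row.decision, row.argumentText)
```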

    StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving

    Most existing chain-of-thought (CoT) prompting methods suffer from issues of generalizability and consistency, as they often rely on instance-specific solutions that may not be applicable to other cases and lack task-level consistency in their reasoning steps. To address these limitations, we propose a comprehensive framework, StrategyLLM, harnessing the capabilities of LLMs to tackle various tasks. The framework improves generalizability by formulating general problem-solving strategies and enhances consistency by producing consistent solutions using these strategies. StrategyLLM employs four LLM-based agents: strategy generator, executor, optimizer, and evaluator, working together to generate, evaluate, and select promising strategies for a given task automatically. The experimental results demonstrate that StrategyLLM outperforms the competitive baseline CoT-SC, which requires human-annotated solutions, on 13 datasets across 4 challenging tasks without human involvement, including math reasoning (39.2% → 43.3%), commonsense reasoning (70.3% → 72.5%), algorithmic reasoning (51.7% → 62.0%), and symbolic reasoning (30.0% → 79.2%).
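    A minimal sketch of the generate–execute–evaluate–optimize loop described above, assuming a generic `complete(prompt)` LLM call; the prompts, scoring rule, and threshold are illustrative assumptions, not the paper's settings:

```python
# Sketch of a StrategyLLM-style loop: strategy generator, executor, evaluator,
# and optimizer built on one generic LLM call. `complete` stands in for any
# chat-completion API; all prompts and parameters here are assumptions.
from typing import Callable, List, Tuple

def run_strategy_search(
    complete: Callable[[str], str],        # LLM call: prompt -> completion
    task_description: str,
    examples: List[Tuple[str, str]],       # (problem, reference answer) pairs
    n_strategies: int = 3,
    n_rounds: int = 2,
    threshold: float = 0.8,
) -> List[Tuple[str, float]]:
    """Generate, execute, evaluate, and optimize task-level strategies."""
    # Strategy generator: propose several general strategies for the task.
    strategies = [
        complete(f"Write a general step-by-step strategy for this task:\n{task_description}")
        for _ in range(n_strategies)
    ]
    scored: List[Tuple[str, float]] = []
    for _ in range(n_rounds):
        scored = []
        for strategy in strategies:
            # Executor: apply the strategy to each example problem.
            answers = [
                complete(f"Strategy:\n{strategy}\n\nSolve:\n{problem}\nFinal answer only:")
                for problem, _ in examples
            ]
            # Evaluator: score the strategy by accuracy on the examples.
            score = sum(a.strip() == ref.strip()
                        for a, (_, ref) in zip(answers, examples)) / len(examples)
            scored.append((strategy, score))
        qualified = [s for s in scored if s[1] >= threshold]
        if qualified:
            return qualified
        # Optimizer: revise strategies that did not reach the threshold.
        strategies = [
            complete(f"Improve this strategy so it solves the task more reliably:\n{s}")
            for s, _ in scored
        ]
    return scored
```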

    Doping Mesoporous Materials with Multicolor Quantum Dots

    • …